1,362 research outputs found

    Association Between Risk-of-Bias Assessments and Results of Randomized Trials in Cochrane Reviews: The ROBES Meta-Epidemiologic Study.

    Flaws in the design of randomized trials may bias intervention effect estimates and increase between-trial heterogeneity. Empirical evidence suggests that these problems are greatest for subjectively assessed outcomes. For the Risk of Bias in Evidence Synthesis (ROBES) Study, we extracted risk-of-bias judgements (for sequence generation, allocation concealment, blinding, and incomplete data) from a large collection of meta-analyses published in the Cochrane Library (issue 4; April 2011). We categorized outcome measures as mortality, other objective outcome, or subjective outcome, and we estimated associations of bias judgements with intervention effect estimates using Bayesian hierarchical models. Among 2,443 randomized trials in 228 meta-analyses, intervention effect estimates were, on average, exaggerated in trials with high or unclear (versus low) risk-of-bias judgements for sequence generation (ratio of odds ratios (ROR) = 0.91, 95% credible interval (CrI): 0.86, 0.98), allocation concealment (ROR = 0.92, 95% CrI: 0.86, 0.98), and blinding (ROR = 0.87, 95% CrI: 0.80, 0.93). In contrast to previous work, we did not observe consistently different bias for subjective outcomes compared with mortality. However, we found an increase in between-trial heterogeneity associated with lack of blinding in meta-analyses with subjective outcomes. Inconsistency in criteria for risk-of-bias judgements applied by individual reviewers is a likely limitation of routinely collected bias assessments. Inadequate randomization and lack of blinding may lead to exaggeration of intervention effect estimates in randomized trials.
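
    A minimal sketch of the kind of association being estimated, assuming invented per-trial data: an inverse-variance weighted meta-regression of trial log odds ratios on a risk-of-bias indicator, whose slope estimates log(ROR). The paper fits Bayesian hierarchical models across hundreds of meta-analyses; this single-meta-analysis, common-effect frequentist analogue is illustrative only.

```python
import numpy as np

# Hypothetical per-trial data: log odds ratios, their standard errors,
# and an indicator for high/unclear (vs low) risk-of-bias judgements.
log_or = np.array([-0.40, -0.10, -0.55, -0.20, -0.35, -0.05])
se     = np.array([ 0.20,  0.25,  0.30,  0.22,  0.28,  0.24])
high_risk = np.array([1, 0, 1, 0, 1, 0])

# Inverse-variance weighted meta-regression of log(OR) on the bias
# indicator (common-effect for simplicity): the slope estimates
# log(ROR), the average exaggeration in trials at high/unclear risk.
w = 1.0 / se**2
X = np.column_stack([np.ones_like(log_or), high_risk])
xtwx = X.T @ (w[:, None] * X)
beta = np.linalg.solve(xtwx, X.T @ (w * log_or))
cov = np.linalg.inv(xtwx)

ror = np.exp(beta[1])
ci = np.exp(beta[1] + np.array([-1.96, 1.96]) * np.sqrt(cov[1, 1]))
print(f"ROR = {ror:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```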

    MultiBUGS: A Parallel Implementation of the BUGS Modeling Framework for Faster Bayesian Inference

    MultiBUGS is a new version of the general-purpose Bayesian modeling software BUGS that implements a generic algorithm for parallelizing Markov chain Monte Carlo (MCMC) algorithms to speed up posterior inference of Bayesian models. The algorithm parallelizes evaluation of the product-form likelihoods formed when a parameter has many children in the directed acyclic graph (DAG) representation, and parallelizes sampling of conditionally independent sets of parameters. A heuristic algorithm is used to decide which approach to use for each parameter and to apportion computation across computational cores. This enables MultiBUGS to automatically parallelize the broad range of statistical models that can be fitted using BUGS-language software, making the dramatic speed-ups of modern multi-core computing accessible to applied statisticians, without requiring any experience of parallel programming. We demonstrate the use of MultiBUGS on simulated data designed to mimic a hierarchical e-health linked-data study of methadone prescriptions including 425,112 observations and 20,426 random effects. Posterior inference for the e-health model takes several hours in existing software, but MultiBUGS can perform inference in only 28 minutes using 48 computational cores.
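
    The first parallelization strategy can be sketched outside BUGS: when a parameter has many conditionally independent children, the likelihood is a product over children, so the log-likelihood sum can be split across worker processes. A toy Python sketch with made-up data (not the MultiBUGS implementation, which also parallelizes sampling of conditionally independent parameters):

```python
import numpy as np
from multiprocessing import Pool

# Hypothetical setup: a single parameter theta with many conditionally
# independent children y_i ~ Normal(theta, 1). The log-likelihood is a
# product (a sum on the log scale) over children, so it can be
# evaluated in parallel chunks.
rng = np.random.default_rng(1)
y = rng.normal(0.5, 1.0, size=100_000)

def chunk_loglik(args):
    # Log-likelihood contribution of one chunk of children (up to a constant).
    theta, y_chunk = args
    return -0.5 * np.sum((y_chunk - theta) ** 2)

def loglik(theta, y, pool, n_chunks=4):
    # Split the children across workers and sum the partial results.
    chunks = np.array_split(y, n_chunks)
    return sum(pool.map(chunk_loglik, [(theta, c) for c in chunks]))

if __name__ == "__main__":
    with Pool(4) as pool:
        print(loglik(0.5, y, pool))
```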

    Bias modelling in evidence synthesis

    Policy decisions often require synthesis of evidence from multiple sources, and the source studies typically vary in rigour and in relevance to the target question. We present simple methods of allowing for differences in rigour (or lack of internal bias) and relevance (or lack of external bias) in evidence synthesis. The methods are developed in the context of reanalysing a UK National Institute for Clinical Excellence technology appraisal in antenatal care, which includes eight comparative studies. Many were historically controlled, only one was a randomized trial, and doses, populations, and outcomes varied between studies and differed from the target UK setting. Using elicited opinion, we construct prior distributions to represent the biases in each study and perform a bias-adjusted meta-analysis. Adjustment had the effect of shifting the combined estimate away from the null by approximately 10%, and the variance of the combined estimate was almost tripled. Our generic bias modelling approach allows decisions to be based on all available evidence, with less rigorous or less relevant studies downweighted by using computationally simple methods.
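
    Once bias priors have been elicited, the adjustment arithmetic is simple: shift each study's estimate by the expected bias and inflate its variance by the bias uncertainty before pooling. A sketch of this generic approach with hypothetical numbers:

```python
import numpy as np

# Hypothetical study estimates (log odds ratios) with variances, plus
# elicited prior means and variances for each study's total bias
# (internal plus external). All numbers are illustrative only.
y = np.array([-0.50, -0.30, -0.70, -0.20])
v = np.array([ 0.04,  0.09,  0.06,  0.10])
bias_mean = np.array([-0.10, -0.05, -0.15, 0.00])
bias_var  = np.array([ 0.02,  0.03,  0.04, 0.01])

# Subtract the expected bias from each estimate, add the bias
# uncertainty to its variance, then pool by inverse-variance weighting.
y_adj = y - bias_mean
v_adj = v + bias_var
w = 1.0 / v_adj
pooled = np.sum(w * y_adj) / np.sum(w)
pooled_var = 1.0 / np.sum(w)
print(f"bias-adjusted pooled log OR = {pooled:.3f} (var {pooled_var:.4f})")
```

    Downweighting falls out automatically: the larger a study's bias variance, the smaller its inverse-variance weight in the pooled estimate.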

    Implications of analysing time-to-event outcomes as binary in meta-analysis: empirical evidence from the Cochrane Database of Systematic Reviews

    BACKGROUND: Systematic reviews and meta-analyses of time-to-event outcomes are frequently published within the Cochrane Database of Systematic Reviews (CDSR). However, these outcomes are handled differently across meta-analyses. They can be analysed on the hazard ratio (HR) scale or can be dichotomized and analysed as binary outcomes using effect measures such as odds ratios (OR) or risk ratios (RR). We investigated the impact of reanalysing meta-analyses from the CDSR that used these different effect measures. METHODS: We extracted two types of meta-analysis data from the CDSR: either recorded in a binary form only ("binary"), or in binary form together with observed minus expected and variance statistics ("OEV"). We explored how results for time-to-event outcomes originally analysed as "binary" change when analysed using the complementary log-log (clog-log) link on a HR scale. For the data originally analysed as HRs ("OEV"), we compared these results with analyses of the same data as binary, either on a HR scale using the clog-log link or on an OR scale using a logit link. RESULTS: The pooled HR estimates were closer to 1 than the OR estimates in the majority of meta-analyses. Important differences in between-study heterogeneity between the HR and OR analyses were also observed. These changes led to discrepant conclusions between the OR and HR scales in some meta-analyses. Situations under which the clog-log link performed better than the logit link, and vice versa, were apparent, indicating that the correct choice of method matters. Differences between scales arise mainly when the event probability is high, and may occur via differences in between-study heterogeneity or via increased within-study standard error in the OR relative to the HR analyses. CONCLUSIONS: We identified that dichotomising time-to-event outcomes may be adequate for low event probabilities but not for high event probabilities. In meta-analyses where only binary data are available, the complementary log-log link may be a useful alternative when analysing time-to-event outcomes as binary; however, the exact conditions need further exploration. These findings provide guidance on the appropriate methodology that should be used when conducting such meta-analyses.
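
    The two scales can be compared directly from event proportions. Under proportional hazards with equal follow-up in both arms, a difference on the clog-log scale approximates the log hazard ratio, whereas the logit scale gives the log odds ratio. A worked toy example (note that at these high event probabilities the HR lands closer to 1 than the OR, as the results above describe):

```python
import numpy as np

def cloglog(p):
    # complementary log-log link: log(-log(1 - p))
    return np.log(-np.log(1.0 - p))

def logit(p):
    return np.log(p / (1.0 - p))

# Hypothetical event proportions from one trial: treatment vs control.
p_treat, p_ctrl = 45 / 100, 60 / 100

# Difference on the clog-log scale ~ log hazard ratio (under
# proportional hazards, equal follow-up); logit scale gives the log OR.
log_hr = cloglog(p_treat) - cloglog(p_ctrl)
log_or = logit(p_treat) - logit(p_ctrl)
print(f"HR ~ {np.exp(log_hr):.2f}, OR ~ {np.exp(log_or):.2f}")
# Prints roughly HR ~ 0.65, OR ~ 0.55: the OR is further from 1.
```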

    The Personalised Randomized Controlled Trial: Evaluation of a new trial design

    In some clinical scenarios, for example, severe sepsis caused by extensively drug-resistant bacteria, there is uncertainty between many common treatments, but a conventional multiarm randomized trial is not possible because individual participants may not be eligible to receive certain treatments. The Personalised Randomized Controlled Trial design allows each participant to be randomized among a “personalised randomization list” of treatments that are suitable for them. The primary aim is to produce treatment rankings that can guide choice of treatment, rather than focusing on estimates of relative treatment effects. Here we use simulation to assess several novel analysis approaches for this innovative trial design. One approach resembles a network meta-analysis, in which participants with the same personalised randomization list are treated as a trial, and both direct and indirect evidence are used. We evaluate this proposed analysis and compare it with analyses making less use of indirect evidence. We also propose new performance measures, including the expected improvement in outcome if the trial's rankings are used to inform future treatment rather than a random choice. We conclude that analysis of a personalised randomized controlled trial can be performed by pooling data from different types of participants and is robust to moderate subgroup-by-intervention interactions under the parameters of our simulation. The proposed approach performs well with respect to estimation bias and coverage. It provides an overall treatment ranking list with reasonable precision, and is likely to improve outcomes on average if used to determine intervention policies and guide individual clinical decisions.
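
    A rough simulation sketch of the design, with hypothetical treatments and success probabilities: each participant is randomized within their own eligibility list, and a crude pooled analysis then ranks treatments. The paper's preferred NMA-like analysis additionally exploits indirect comparisons between list types, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(7)
treatments = ["A", "B", "C", "D"]
true_success = {"A": 0.55, "B": 0.50, "C": 0.45, "D": 0.40}  # hypothetical

# Each participant has a personalised randomization list: the subset of
# treatments they are eligible to receive. Randomize within that list.
records = []
for _ in range(2000):
    k = rng.integers(2, len(treatments) + 1)          # list length 2..4
    plist = list(rng.choice(treatments, size=k, replace=False))
    t = rng.choice(plist)                             # randomized allocation
    outcome = rng.random() < true_success[t]          # binary success
    records.append((t, outcome))

# Crude pooled analysis (ignoring list composition): estimate the
# success probability per treatment and rank.
est = {t: np.mean([y for tt, y in records if tt == t]) for t in treatments}
ranking = sorted(treatments, key=est.get, reverse=True)
print(est, ranking)
```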

    Between-trial heterogeneity in meta-analyses may be partially explained by reported design characteristics.

    OBJECTIVE: We investigated the associations between risk of bias judgments from Cochrane reviews for sequence generation, allocation concealment and blinding, and between-trial heterogeneity. STUDY DESIGN AND SETTING: Bayesian hierarchical models were fitted to binary data from 117 meta-analyses, to estimate the ratio λ by which heterogeneity changes for trials at high/unclear risk of bias compared with trials at low risk of bias. We estimated the proportion of between-trial heterogeneity in each meta-analysis that could be explained by the bias associated with specific design characteristics. RESULTS: Univariable analyses showed that heterogeneity variances were, on average, increased among trials at high/unclear risk of bias for sequence generation (λ̂ = 1.14, 95% interval: 0.57-2.30) and blinding (λ̂ = 1.74, 95% interval: 0.85-3.47). Trials at high/unclear risk of bias for allocation concealment were on average less heterogeneous (λ̂ = 0.75, 95% interval: 0.35-1.61). Multivariable analyses showed that a median of 37% (95% interval: 0-71%) of the heterogeneity variance could be explained by trials at high/unclear risk of bias for sequence generation, allocation concealment, and/or blinding. All 95% intervals for changes in heterogeneity were wide and included the null of no difference. CONCLUSION: Our interpretation of the results is limited by imprecise estimates. There is some indication that between-trial heterogeneity could be partially explained by reported design characteristics, and hence adjustment for bias could potentially improve the accuracy of meta-analysis results.
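
    The ratio λ can be illustrated with a simple frequentist stand-in for the paper's Bayesian hierarchical model: estimate the between-trial variance τ² separately in the high/unclear and low risk-of-bias subgroups (here with DerSimonian-Laird) and take their ratio. All numbers are invented.

```python
import numpy as np

def dl_tau2(y, v):
    # DerSimonian-Laird estimate of the between-trial variance tau^2.
    w = 1.0 / v
    mu = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)

# Hypothetical log odds ratios and variances, split by risk of bias.
y_low,  v_low  = np.array([-0.4, 0.1, -0.3, 0.2]), np.array([0.05, 0.06, 0.04, 0.07])
y_high, v_high = np.array([-0.6, 0.2, -0.8, 0.1]), np.array([0.05, 0.06, 0.04, 0.07])

tau2_low, tau2_high = dl_tau2(y_low, v_low), dl_tau2(y_high, v_high)
lam = tau2_high / tau2_low if tau2_low > 0 else float("inf")
print(f"tau^2 low = {tau2_low:.3f}, high/unclear = {tau2_high:.3f}, lambda = {lam:.2f}")
```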

    Implementing informative priors for heterogeneity in meta-analysis using meta-regression and pseudo data.

    Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for the between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation gives results closer to MCMC if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers.
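
    The pseudo-data representation rests on conjugacy: an inverse-gamma IG(a, b) prior on a variance contributes like 2a pseudo observations carrying a residual sum of squares of 2b. A stand-alone sketch of that equivalence (the paper embeds it in a meta-regression for the between-study variance; values here are hypothetical):

```python
import numpy as np

# Hypothetical IG(a, b) prior for tau^2, plus residuals from 10 studies.
a, b = 2.0, 0.08
rng = np.random.default_rng(3)
resid = rng.normal(0.0, 0.2, size=10)

# The prior acts like 2a pseudo observations with sum of squares 2b.
n_pseudo, ss_pseudo = 2 * a, 2 * b
n, ss = len(resid), np.sum(resid**2)

# Conjugate update: posterior is IG(a + n/2, b + ss/2). Its mean equals
# the augmented sum of squares over the augmented degrees of freedom
# (minus 2), which is exactly the pseudo-data form.
post_a, post_b = a + n / 2, b + ss / 2
post_mean = post_b / (post_a - 1)
augmented = (ss_pseudo + ss) / (n_pseudo + n - 2)
print(f"posterior mean tau^2 = {post_mean:.4f} (pseudo-data form {augmented:.4f})")
```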